The Unified Theory of Pseudorandomness
Pseudorandomness is the theory of efficiently generating objects that look "random" despite being constructed with little or no randomness. One of the achievements of this research area has been the realization that a number of fundamental and widely studied "pseudorandom" objects are all almost equivalent when viewed appropriately. These objects include pseudorandom generators, expander graphs, list-decodable error-correcting codes, averaging samplers, and hardness amplifiers. In this survey, we describe the connections between all of these objects, showing how they can all be cast within a single "list-decoding framework" that brings out both their similarities and differences.
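As a rough illustration of that framework (the notation below is a paraphrase for this listing, not a quotation from the survey): each of these objects can be described syntactically as a function Γ : [N] × [D] → [M], and the defining property takes the form

    for every test T ⊆ [M] in some class, |LIST_Γ(T, ε)| is small, where
    LIST_Γ(T, ε) = { x ∈ [N] : Pr_{y ← [D]} [ Γ(x, y) ∈ T ] > ε }.

The different objects then correspond to different choices of the class of tests T, the agreement threshold ε, and the bound imposed on the list size.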
On Transformations of Interactive Proofs that Preserve the Prover's Complexity
Goldwasser and Sipser [GS89] proved that every interactive proof system can be transformed into a public-coin one (a.k.a., an Arthur-Merlin game). Their transformation has the drawback that the computational complexity of the prover's strategy is not preserved. We show that this is inherent, by proving that the same must be true of any transformation which only uses the original prover and verifier strategies as "black boxes". Our negative result holds even if the original proof system is restricted to be honest-verifier perfect zero knowledge and the transformation can also use the simulator as a black box.
We also examine a similar deficiency in a transformation of Fürer et al. [FGM+89] from interactive proofs to ones with perfect completeness. We argue that the increase in prover complexity incurred by their transformation is necessary, given that their construction is a black-box transformation which works regardless of the verifier's computational complexity.
A Complete Problem for Statistical Zero Knowledge
We present the first complete problem for SZK, the class of (promise) problems possessing statistical zero-knowledge proofs (against an honest verifier). The problem, called STATISTICAL DIFFERENCE, is to decide whether two efficiently samplable distributions are either statistically close or far apart. This gives a new characterization of SZK that makes no reference to interaction or zero knowledge.
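For concreteness, the statistical difference (total variation distance) between distributions X and Y over a common domain is

    Δ(X, Y) = ½ · Σ_x | Pr[X = x] − Pr[Y = x] |,

and in the standard formulation of this complete problem the YES instances are pairs of sampling circuits whose output distributions satisfy Δ ≥ 2/3, while the NO instances satisfy Δ ≤ 1/3 (the particular constants are not essential, as the polarization results listed below indicate).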
We propose the use of complete problems to unify and extend the study of statistical zero knowledge. To this end, we examine several consequences of our Completeness Theorem and its proof, such as:
* A way to make every (honest-verifier) statistical zero-knowledge proof very communication efficient, with the prover sending only one bit to the verifier (to achieve soundness error 1/2).
* Simpler proofs of many of the previously known results about statistical zero knowledge, such as the Fortnow and Aiello–Håstad upper bounds on the complexity of SZK and Okamoto's result that SZK is closed under complement.
* Strong closure properties of SZK which amount to constructing statistical zero-knowledge proofs for complex assertions built out of simpler assertions already shown to be in SZK.
* New results about the various measures of "knowledge complexity," including a collapse in the hierarchy corresponding to knowledge complexity in the "hint" sense.
* Algorithms for manipulating the statistical difference between efficiently samplable distributions, including transformations which "polarize" and "reverse" the statistical relationship between a pair of distributions.
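As an illustration of the last item, here is a minimal Python sketch of the two standard operations from which such polarization is typically built; the function names and interfaces are ours, and this is not a verbatim rendering of the paper's construction.

    import random

    # Sketch only: two basic operations on a pair of samplers (X0, X1).
    # direct_product drives a large statistical difference toward 1;
    # xor_combine drives a small statistical difference delta down to delta**k.

    def direct_product(sample, k):
        # one draw of k independent copies from a single sampler
        return tuple(sample() for _ in range(k))

    def xor_combine(sample0, sample1, k, b):
        # pick b_1, ..., b_k uniformly subject to b_1 xor ... xor b_k = b,
        # then output one sample from each selected sampler
        bits = [random.randrange(2) for _ in range(k - 1)]
        bits.append(b ^ (sum(bits) % 2))
        return tuple((sample1 if bit else sample0)() for bit in bits)

Alternating these two operations is the usual route from a constant gap in statistical difference to an exponentially polarized one.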
A Lower Bound on List Size for List Decoding
A q-ary error-correcting code C ⊆ {1,2,...,q}^n is said to be list-decodable to radius ρ with list size L if every Hamming ball of radius ρ contains at most L codewords of C. We prove that in order for a q-ary code to be list-decodable up to radius (1−1/q)(1−ε)n, we must have L = Ω(1/ε²). Specifically, we prove that there exists a constant c_q > 0 and a function f_q such that for small enough ε > 0, if C is list-decodable to radius (1−1/q)(1−ε)n with list size c_q/ε², then C has at most f_q(ε) codewords, independent of n. This result is asymptotically tight (treating q as a constant), since such codes with an exponential (in n) number of codewords are known for list size L = O(1/ε²). A result similar to ours is implicit in Blinovsky (Problems of Information Transmission, 1986) for the binary (q = 2) case. Our proof is simpler and works for all alphabet sizes, and provides more intuition for why the lower bound arises.
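To make the definition concrete, here is a brute-force (exponential-time) Python check of list-decodability, purely for illustration at tiny parameters; it plays no role in the paper's proof, and the representation of codewords as tuples over {1,...,q} is only an assumption for this sketch.

    from itertools import product

    def hamming_distance(x, y):
        return sum(a != b for a, b in zip(x, y))

    def is_list_decodable(code, radius, L, q, n):
        # every Hamming ball of the given radius contains at most L codewords
        return all(
            sum(hamming_distance(center, c) <= radius for c in code) <= L
            for center in product(range(1, q + 1), repeat=n)
        )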
Comparing Entropies in Statistical Zero Knowledge with Applications to the Structure of SZK
We consider the following (promise) problem, denoted ED (for Entropy Difference): The input is a pair of circuits, and YES instances (resp., NO instances) are such pairs in which the first (resp., second) circuit generates a distribution with noticeably higher entropy.
On one hand we show that any language having an (honest-verifier) statistical zero-knowledge proof is Karp-reducible to ED. On the other hand, we present a public-coin (honest-verifier) statistical zero-knowledge proof for ED. Thus, we obtain an alternative proof of Okamoto's result that HVSZK (i.e., honest-verifier statistical zero knowledge) equals public-coin HVSZK. The new proof is much simpler than the original one. The above also yields a trivial proof that HVSZK is closed under complementation (since ED easily reduces to its complement). Among the new results obtained is an equivalence of a weak notion of statistical zero knowledge to the standard one.
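For reference, the usual convention in this line of work makes "noticeably higher" a slack of one bit of Shannon entropy:

    YES instances: pairs of circuits (X, Y) with H(X) ≥ H(Y) + 1
    NO instances:  pairs of circuits (X, Y) with H(Y) ≥ H(X) + 1,

where H denotes the Shannon entropy of the circuit's output distribution when the circuit is fed uniformly random input bits.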
Extracting Randomness from Samplable Distributions
The standard notion of a randomness extractor is a procedure which converts any weak source of randomness into an almost uniform distribution. The conversion necessarily uses a small amount of pure randomness, which can be eliminated by complete enumeration in some, but not all, applications.
Here, we consider the problem of deterministically converting a weak source of randomness into an almost uniform distribution. Previously, deterministic extraction procedures were known only for sources satisfying strong independence requirements. In this paper, we look at sources which are samplable, i.e., can be generated by an efficient sampling algorithm. We seek an efficient deterministic procedure that, given a sample from any samplable distribution of sufficiently large min-entropy, gives an almost uniformly distributed output. We explore the conditions under which such deterministic extractors exist.
We observe that no deterministic extractor exists if the sampler is allowed to use more computational resources than the extractor. On the other hand, if the extractor is allowed (polynomially) more resources than the sampler, we show that deterministic extraction becomes possible. This is true unconditionally in the nonuniform setting (i.e., when the extractor can be computed by a small circuit), and (necessarily) relies on complexity assumptions in the uniform setting.
One of our uniform constructions is as follows: assuming that there are problems in E = DTIME(2^{O(n)}) that are not solvable by subexponential-size circuits with Σ_6 gates, there is an efficient extractor that transforms any samplable distribution of length n and min-entropy (1−γ)n into an output distribution of length (1−O(γ))n, where γ is any sufficiently small constant. The running time of the extractor is polynomial in n and the circuit complexity of the sampler. These extractors are based on a connection between deterministic extraction from samplable distributions and hardness against nondeterministic circuits, and on the use of nondeterminism to substantially speed up "list decoding" algorithms for error-correcting codes such as multivariate polynomial codes and Hadamard-like codes.
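For reference, the kind of object sought here can be stated as follows (parameter names chosen for illustration): a function Ext : {0,1}^n → {0,1}^m is a deterministic ε-extractor for a class of sources if, for every distribution X in the class with min-entropy

    H_∞(X) = min_x log₂(1 / Pr[X = x]) ≥ k,

the output Ext(X) is within statistical distance ε of the uniform distribution on {0,1}^m. In this paper the class consists of distributions samplable by circuits of some bounded size.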
Checking Polynomial Identities over any Field: Towards a Derandomization?
We present a Monte Carlo algorithm for testing multivariate polynomial identities over any field using fewer random bits than other methods. To test whether a polynomial P is zero, our method uses a number of random bits that grows only logarithmically with the degree of P in each variable, and achieves any inverse polynomial error in polynomial time. The algorithm applies to polynomials given as a black box or in some implicit representation such as a straight-line program. Our method works by evaluating P at truncated formal power series representing square roots of irreducible polynomials over the field. This approach is similar to that of Chen and Kao (STOC '97), but with the advantage that the techniques are purely algebraic and apply to any field. We also prove a lower bound showing that the number of random bits used by our algorithm is essentially optimal in the black-box model.
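For contrast, here is the standard Schwartz–Zippel black-box test that such results improve upon in terms of randomness; this is not the paper's power-series method, and the interface below (a callable P returning integer values, over a prime field of size field_size) is only an assumption for illustration.

    import random

    def schwartz_zippel_is_zero(P, num_vars, total_degree, field_size, trials=20):
        # Standard baseline, NOT the paper's method: evaluate the black-box
        # polynomial P at uniformly random points of F_p (p = field_size,
        # assumed prime and larger than total_degree).  A nonzero P survives
        # each trial with probability at most total_degree / field_size, but
        # each trial spends about num_vars * log2(field_size) random bits --
        # far more than the logarithmic-in-degree count discussed above.
        for _ in range(trials):
            point = [random.randrange(field_size) for _ in range(num_vars)]
            if P(*point) % field_size != 0:
                return False  # P is certainly nonzero
        return True  # P is zero with high probability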
Are PCPs Inherent in Efficient Arguments?
Starting with Kilian (STOC '92), several works have shown how to use probabilistically checkable proofs (PCPs) and cryptographic primitives such as collision-resistant hashing to construct very efficient argument systems (a.k.a. computationally sound proofs), for example with polylogarithmic communication complexity. Ishai et al. (CCC '07) raised the question of whether PCPs are inherent in efficient arguments, and to what extent. We give evidence that they are, by showing how to convert any argument system whose soundness is reducible to the security of some cryptographic primitive into a PCP system whose efficiency is related to that of the argument system and the reduction (under certain complexity assumptions).
On Extractors and Exposure-Resilient Functions for Sublogarithmic Entropy
We study resilient functions and exposure-resilient functions in the low-entropy regime. A resilient function (a.k.a. deterministic extractor for oblivious bit-fixing sources) maps any distribution on n-bit strings in which k bits are uniformly random and the rest are fixed into an output distribution that is close to uniform. With exposure-resilient functions, all the input bits are random, but we ask that the output be close to uniform conditioned on any subset of n − k input bits. In this paper, we focus on the case that k is sublogarithmic in n.
We simplify and improve an explicit construction of resilient functions for k sublogarithmic in n due to Kamp and Zuckerman (SICOMP 2006), achieving error exponentially small in k rather than polynomially small in k. Our main result is that when k is sublogarithmic in n, the short output length of this construction (O(log k) output bits) is optimal for extractors computable by a large class of space-bounded streaming algorithms.
Next, we show that a random function is a resilient function with high probability if and only if k is superlogarithmic in n, suggesting that our main result may apply more generally. In contrast, we show that a random function is a static (resp. adaptive) exposure-resilient function with high probability even if k is as small as a constant (resp. loglog n). No explicit exposure-resilient functions achieving these parameters are known.
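As a toy illustration of the definition of a resilient function (the textbook one-output-bit example, not the Kamp–Zuckerman construction discussed above; names below are ours):

    import random
    from functools import reduce
    from operator import xor

    def parity_extractor(bits):
        # XOR of all input bits: if at least one bit of an oblivious
        # bit-fixing source is uniform, the output bit is exactly uniform.
        return reduce(xor, bits)

    def bit_fixing_sample(n, free_positions):
        # toy oblivious bit-fixing source: positions in free_positions are
        # uniform and independent; every other position is fixed to 0
        return [random.randrange(2) if i in free_positions else 0 for i in range(n)]

For k ≥ 1 this already achieves error zero with a single output bit; the point of the constructions discussed above is to extract more, namely O(log k) bits, when k is sublogarithmic in n.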
- …